Fix empty-target self-distillation loss to stay connected to model graph#5572

Open
walawalagoose wants to merge 2 commits into huggingface:main from walawalagoose:hjz

Conversation

walawalagoose commented Apr 16, 2026

What does this PR do?

This PR fixes an edge case in the experimental SDPO/self-distillation path where an empty self-distillation batch returns a standalone zero tensor instead of a zero loss connected to the model forward graph.

In distillation_only mode, a batch can legitimately contain no valid self-distillation targets, for example when no rollout in the current batch exceeds success_reward_threshold and no environment feedback is used. In the current implementation, this path returns:

torch.tensor(0.0, device=completion_ids.device, requires_grad=True)

Although this tensor has requires_grad=True, it is not connected to model parameters. Under DeepSpeed ZeRO-2, calling backward() on such a graph-disconnected loss can fail during gradient reduction.
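
For illustration only (this is not the trainer code, just a minimal self-contained PyTorch example), the difference between a standalone zero loss and a graph-connected zero loss looks like this:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)

# Standalone zero tensor: requires_grad=True, but it is a fresh leaf with no
# grad_fn, so backward() never reaches the model's parameters.
disconnected_loss = torch.tensor(0.0, requires_grad=True)
disconnected_loss.backward()
print(model.weight.grad)  # None: the parameters received no gradients

# Graph-connected zero loss: multiply real forward outputs by an all-zero
# mask, so the value is 0.0 but backward() still visits every parameter.
outputs = model(torch.randn(2, 4))
mask = torch.zeros_like(outputs)
connected_loss = (outputs * mask).sum()
connected_loss.backward()
print(model.weight.grad)  # all-zero tensor: gradients exist, they are just zero
```

On a single device both calls run, but only the second leaves gradients on the parameters, which is what distributed gradient reduction expects.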

This PR fixes the issue by keeping the self-distillation loss connected to the student/teacher forward graph even when the effective target mask is empty, so the loss remains numerically zero but backward stays safe.

This behavior is also closer to the original verl SDPO implementation, where an all-zero mask yields a zero loss through masking rather than through an early return.
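
A minimal sketch of that masking-based aggregation, assuming hypothetical per_token_kl and response_mask tensors rather than the actual SelfDistillationMixin internals:

```python
import torch

def masked_distillation_loss(per_token_kl: torch.Tensor,
                             response_mask: torch.Tensor) -> torch.Tensor:
    """Aggregate a per-token distillation loss under a 0/1 response mask.

    When the mask is all zeros the result is numerically 0.0, but it is still
    derived from the student/teacher forward outputs, so backward() and
    distributed gradient reduction stay well defined.
    """
    # clamp(min=1) avoids division by zero for empty-target batches while
    # keeping the whole computation inside the autograd graph.
    denom = response_mask.sum().clamp(min=1.0)
    return (per_token_kl * response_mask).sum() / denom
```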

A representative failure mode is:

  • SDPOTrainer
  • sdpo_policy_loss_mode="distillation_only"
  • include_environment_feedback=False
  • current batch has no successful rollouts
  • DeepSpeed ZeRO-2 enabled

In that setup, DeepSpeed can fail during backward/all-reduce because the returned zero loss is graph-disconnected.
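
Purely as an illustration of that combination (the setting names below are copied from this description, not from a verified config API, and the surrounding trainer plumbing is omitted):

```python
# Illustrative settings for the failure mode described above.
sdpo_settings = {
    "sdpo_policy_loss_mode": "distillation_only",  # only the self-distillation term contributes
    "include_environment_feedback": False,         # no feedback-derived targets
    "success_reward_threshold": 1.0,               # assume no rollout in the batch reaches this
}
# Under DeepSpeed ZeRO-2, a batch with zero successful rollouts then yields an
# empty self-distillation target mask, which previously produced the
# graph-disconnected zero loss shown earlier.
```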

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline, Pull Request section?
  • Was this discussed/approved via a GitHub issue? Please add a link to it if that's the case.
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

AI writing disclosure

We welcome the use of AI tools to help with contributions. For transparency and to help us improve our review process, please indicate the level of AI involvement in this PR.

  • No AI usage: the PR was written entirely by a human.
  • AI-assisted: some parts were suggested or improved by AI, but the PR was written and reviewed by a human.
  • AI-generated: the PR was mostly or fully generated by an AI tool.

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.


Note

Medium Risk
Changes loss computation behavior in the experimental self-distillation path by always running the model forward passes even when the target mask is empty, which can affect training performance/metrics but should improve backward stability (e.g., under ZeRO).

Overview
Fixes an edge case in SelfDistillationMixin._compute_self_distillation_loss where batches with an all-zero response_mask previously returned a standalone torch.tensor(0.0).

The loss is now always computed via the normal masked aggregation so it stays connected to the student/teacher forward graph for safe backward()/reduction, and a new self_distillation/empty_target_batch metric is logged to track when this happens.
